
    A Fast and Scalable Graph Coloring Algorithm for Multi-core and Many-core Architectures

    Irregular computations on unstructured data are an important class of problems for parallel programming. Graph coloring is often an important preprocessing step, e.g. as a way to perform dependency analysis for safe parallel execution. The total run time of a coloring algorithm adds to the overall parallel overhead of the application, whereas the number of colors used determines the amount of exposed parallelism. A fast and scalable coloring algorithm that uses as few colors as possible is therefore vital for the overall parallel performance and scalability of many irregular applications that depend upon runtime dependency analysis. Catalyurek et al. have proposed a graph coloring algorithm which relies on speculative, local assignment of colors. In this paper we present an improved version which runs even more optimistically, with less thread synchronization and fewer conflicts than Catalyurek et al.'s algorithm. We show that the new technique scales better on multi-core and many-core systems and performs up to 1.5x faster than its predecessor on graphs with high-degree vertices, while keeping the number of colors at the same near-optimal levels. Comment: To appear in the proceedings of Euro Par 201
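The speculative coloring scheme the abstract builds on can be sketched in a few lines: tentatively assign each uncolored vertex the smallest color not used by its already-colored neighbours, then run a conflict-detection pass that uncolors one endpoint of every same-colored edge, and repeat until no conflicts remain. The sequential Python below is an illustrative stand-in for the parallel algorithm (in the real algorithm the assignment and detection phases run concurrently across threads); the function name and tie-breaking rule are assumptions, not the paper's implementation.

```python
def speculative_color(adj):
    """Speculative greedy coloring sketch.

    adj: dict mapping each vertex to an iterable of its neighbours.
    Returns a dict mapping each vertex to a non-negative color.
    """
    color = {v: None for v in adj}
    uncolored = set(adj)
    while uncolored:
        # Phase 1: speculative assignment (parallel in the real algorithm).
        for v in uncolored:
            used = {color[u] for u in adj[v] if color[u] is not None}
            c = 0
            while c in used:
                c += 1
            color[v] = c
        # Phase 2: conflict detection — uncolor the higher-id endpoint
        # of any edge whose two endpoints received the same color.
        conflicts = set()
        for v in uncolored:
            for u in adj[v]:
                if u != v and color[u] == color[v]:
                    conflicts.add(max(u, v))
        for v in conflicts:
            color[v] = None
        uncolored = conflicts
    return color
```

The key design point the abstract highlights is that threads color optimistically without locking; occasional conflicts are cheap to repair in a later round, so reducing synchronization outweighs the cost of recoloring.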

    Modelling fast forms of visual neural plasticity using a modified second-order motion energy model

    The Adelson-Bergen motion energy sensor is well established as the leading model of low-level visual motion sensing in human vision. However, the standard model cannot predict adaptation effects in motion perception. A previous paper by Pavan et al. (Journal of Vision 10:1-17, 2013) presented an extension to the model which uses a first-order RC gain-control circuit (leaky integrator) to implement adaptation effects which can span many seconds, and showed that the extended model's output is consistent with psychophysical data on the classic motion after-effect. Recent psychophysical research has reported adaptation over much shorter time periods, spanning just a few hundred milliseconds. The present paper further extends the sensor model to implement rapid adaptation, by adding a second-order RC circuit which causes the sensor to require a finite amount of time to react to a sudden change in stimulation. The output of the new sensor accounts accurately for psychophysical data on rapid forms of facilitation (rapid visual motion priming, rVMP) and suppression (rapid motion after-effect, rMAE). Changes in natural scene content occur over multiple time scales, and multi-stage leaky integrators of the kind proposed here offer a computational scheme for modelling adaptation over multiple time scales. © 2014 Springer Science+Business Media New York
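The "second-order RC circuit" idea can be illustrated by cascading two first-order leaky integrators: a single RC stage responds to a step input immediately, while two stages in series produce an S-shaped response with near-zero initial slope, so the modelled sensor needs a finite time to react to a sudden change in stimulation. The sketch below is a minimal forward-Euler discretisation under assumed, illustrative time constants, not the paper's fitted parameters.

```python
def leaky(x, tau, dt=1.0):
    """First-order RC stage: y' = (x - y) / tau, forward-Euler discretised."""
    y, out = 0.0, []
    for xt in x:
        y += (xt - y) * dt / tau
        out.append(y)
    return out

def second_order_response(x, tau1=50.0, tau2=50.0, dt=1.0):
    """Two RC stages in series: a delayed, S-shaped response to a step."""
    return leaky(leaky(x, tau1, dt), tau2, dt)
```

Because each stage has its own time constant, stacking stages gives the multi-time-scale adaptation the abstract's last sentence describes: fast stages track rapid changes while slow stages track sustained stimulation.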

    Modelling adaptation to directional motion using the Adelson-Bergen energy sensor

    The motion energy sensor has been shown to account for a wide range of physiological and psychophysical results in motion detection and discrimination studies. It has become established as the standard computational model for retinal movement sensing in the human visual system. Adaptation effects have been extensively studied in the psychophysical literature on motion perception, and play a crucial role in theoretical debates, but the current implementation of the energy sensor does not provide directly for modelling adaptation-induced changes in output. We describe an extension of the model to incorporate changes in output due to adaptation. The extended model first computes a space-time representation of the output to a given stimulus, and then an RC gain-control circuit ("leaky integrator") is applied to the time-dependent output. The output of the extended model shows effects which mirror those observed in psychophysical studies of motion adaptation: a decline in sensor output during stimulation, and changes in the relative outputs of different sensors following this adaptation.
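The leaky-integrator gain control described above can be sketched as follows: an RC stage accumulates a running average of recent sensor output, and that accumulated state divisively attenuates the current response, so output declines during sustained stimulation. The divisive form, the time constant, and the gain value here are assumptions for illustration, not the paper's fitted model.

```python
def leaky_integrator(x, tau, dt=1.0):
    """RC low-pass ("leaky integrator"): y' = (x - y) / tau, Euler-stepped."""
    y, out = 0.0, []
    for xt in x:
        y += (xt - y) * dt / tau
        out.append(y)
    return out

def adapted_response(x, tau=200.0, gain=2.0, dt=1.0):
    """Sensor output divisively attenuated by its own integrated history,
    so a sustained stimulus produces a gradually declining response."""
    state = leaky_integrator(x, tau, dt)
    return [xt / (1.0 + gain * yt) for xt, yt in zip(x, state)]
```

Driving this with a constant input reproduces the qualitative effect the abstract reports: the response starts near the unadapted level and decays toward a lower steady state as the integrator charges.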